PAC-learning with approximate predictors

Authors

Abstract

Approximate learning machines have become popular in the era of small devices, including quantised, factorised, hashed, or otherwise compressed predictors, and the quest to explain and guarantee good generalisation abilities for such methods has just begun. In this paper, we study the role of approximability in learning, both in the full-precision and the approximated settings. We do this through a notion of sensitivity of predictors to the action of the approximation operator at hand. We prove upper bounds on the generalisation of such predictors, yielding the following main findings for any PAC-learnable class and any given approximation operator: (1) we show that, under mild conditions, approximable target concepts are learnable from a smaller labelled sample, provided sufficient unlabelled data; (2) we give algorithms that guarantee a good predictor whose approximation also enjoys the same generalisation guarantees; (3) we highlight natural examples of structure in the class of sensitivities, which reduce, and possibly even eliminate, the otherwise abundant requirement of additional unlabelled data, and henceforth shed new light onto what makes one problem instance easier to learn than another. These results embed the scope of modern model-compression approaches into the general goal of statistical learning theory, which in return suggests appropriate algorithms through minimising uniform bounds.
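The central object in the abstract is the sensitivity of a predictor to an approximation operator, estimated from unlabelled data. The following is a minimal hypothetical sketch (not taken from the paper): the predictor is a plain linear model, the approximation operator is uniform weight quantisation, and the empirical sensitivity is the largest change in output the operator causes over an unlabelled sample. All names (`predict`, `quantise`, `sensitivity`) and the step size are illustrative assumptions.

```python
# Hypothetical illustration of "sensitivity to an approximation operator":
# quantise the weights of a linear predictor and measure the worst-case
# output change over an (unlabelled) sample. Not the paper's algorithm.

def predict(w, x):
    # Linear predictor: inner product of weights and features.
    return sum(wi * xi for wi, xi in zip(w, x))

def quantise(w, step=0.5):
    # Approximation operator: round each weight to the nearest multiple of `step`.
    return [round(wi / step) * step for wi in w]

def sensitivity(w, samples, step=0.5):
    # Empirical sensitivity: largest output change caused by the operator
    # over the sample. Note that labels are not needed, which mirrors the
    # abstract's use of additional unlabelled data.
    wq = quantise(w, step)
    return max(abs(predict(w, x) - predict(wq, x)) for x in samples)

w = [0.3, -1.2, 0.7]
unlabelled = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 1.0]]
print(sensitivity(w, unlabelled))
```

A small sensitivity says the quantised predictor behaves almost like the full-precision one on the data distribution, which is the kind of structure the paper's bounds exploit.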


Similar articles

PAC Learning with Irrelevant Attributes

We consider the problem of learning in the presence of irrelevant attributes in Valiant's PAC model [V84]. In the PAC model, the goal of the learner is to produce an approximately correct hypothesis from random sample data. If the number of relevant attributes in the target function is small, it may be desirable to produce a hypothesis that also depends on only a small number of variables. Hauss...

PAC Learning with Nasty Noise

We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, which is a variant of the PAC model, proceeds as follows: Suppose that the learning algorithm during its execution asks for m examples. The examples that the algorithm gets are generat...

PAC Learning with Simple Examples

We define a new PAC learning model. In this model, examples are drawn according to the universal distribution m(· | f) of Solomonoff-Levin, where f is the target concept. The consequence is that the simple examples of the target concept have a high probability of being provided to the learning algorithm. We prove an Occam's Razor theorem. We show that the class of poly-term DNF is learnable, and the...

PAC Reinforcement Learning with an Imperfect Model

Reinforcement learning (RL) methods have proved to be successful in many simulated environments. The common approaches, however, are often too sample-intensive to be applied directly in the real world. A promising approach to addressing this issue is to train an RL agent in a simulator and transfer the solution to the real environment. When a high-fidelity simulator is available we would expect ...

PAC Reinforcement Learning with Rich Observations

We propose and study a new model for reinforcement learning with rich observations, generalizing contextual bandits to sequential decision making. These models require an agent to take actions based on observations (features) with the goal of achieving long-term performance competitive with a large set of policies. To avoid barriers to sample-efficient learning associated with large observation...

Journal

Journal title: Machine Learning

Year: 2023

ISSN: 0885-6125, 1573-0565

DOI: https://doi.org/10.1007/s10994-023-06301-4